49 research outputs found

    Interventions to develop an improvement culture within nonprofit organizations: The case of Saudi Arabia

    It has been confirmed that organizational culture has a remarkable impact on facilitating continuous improvement. Nonetheless, little empirical research has investigated how organizational culture can facilitate continuous improvement. This paper therefore asks what interventions facilitate a continuous improvement culture within nonprofit organizations. A qualitative approach was judged appropriate for answering the research question. The present research adopted an interpretive paradigm whereby reality, treated as a subjective and multiple entity that is “socially constructed”, can be explored from the participants' perspectives. Grounded theory was the chosen approach for collecting and analysing the qualitative data; thus, the constructed theories were ‘grounded’ in the data themselves. Thirty-one interviews in fifteen nonprofit organizations yielded data which, when analysed, revealed a number of interventions that were subsequently developed by the participants during five focus group discussions.

    Developing a framework to facilitate an improvement culture: the case of Saudi Arabia

    This research explores aspects of organizational culture that facilitate continuous improvement within nonprofit organizations. Research shows that organizational culture plays a significant role in driving organizations and that organizations benefit from continuous improvement. The nonprofit sector contributes much to the economy and to well-being, yet is still often neglected; hence, Saudi nonprofit organizations serve here as the setting for building a framework that promotes a culture of continuous improvement. In this qualitative research, grounded theory is the chosen approach. Eighteen interviews in nine organizations yielded data which, when analysed, revealed forty emergent factors, classifiable into six initial themes developed by focus group participants. However, synthesis of the framework is still in progress.

    Mode-division-multiplexing of multiple Bessel-Gaussian beams carrying orbital-angular-momentum for obstruction-tolerant free-space optical and millimetre-wave communication links

    We experimentally investigate the potential of using ‘self-healing’ Bessel-Gaussian beams carrying orbital angular momentum to overcome limitations in obstructed free-space optical and 28-GHz millimetre-wave communication links. We multiplex and transmit two beams (l = +1 and +3) over 1.4 metres in both the optical and millimetre-wave domains. Each optical beam carries 50-Gbaud quadrature-phase-shift-keyed data, and each millimetre-wave beam carries 1-Gbaud 16-quadrature-amplitude-modulated data. In both types of link, opaque disks of different sizes are used to obstruct the beams at different transverse positions. We observe self-healing after the obstructions and assess crosstalk and power penalty when data are transmitted. Moreover, we show that Bessel-Gaussian orbital-angular-momentum beams are more tolerant to obstructions than non-Bessel orbital-angular-momentum beams. For example, when obstructions 1 and 0.44 times the size of the l = +1 beam are placed at the beam centre, the optical and millimetre-wave Bessel-Gaussian beams show ~6 dB and ~8 dB reductions in crosstalk, respectively.
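
    For context, a textbook form of the transverse field of a Bessel-Gaussian beam carrying orbital angular momentum, at the waist plane (the notation below is illustrative and not taken from the paper itself), is

        E_\ell(r,\phi) \;\propto\; J_\ell(k_r r)\, e^{-r^2/w_0^2}\, e^{i\ell\phi},

    where J_\ell is the \ell-th-order Bessel function of the first kind, k_r is the radial wavevector, and w_0 is the Gaussian waist; the e^{i\ell\phi} phase term carries an orbital angular momentum of \ell\hbar per photon. Because J_\ell corresponds to a conical spectrum of plane waves, rays that bypass an obstruction re-interfere behind it, which is the ‘self-healing’ behaviour exploited above.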

    Roadmap on all-optical processing

    The ability to process optical signals without passing into the electrical domain has always attracted the attention of the research community. Processing photons by photons unfolds new scenarios, in principle allowing for unprecedented signal-processing and computing capabilities. Optical computation can be seen as a large scientific field in which researchers operate, trying to find solutions to their specific needs by different approaches; although the challenges can be substantially different, they are typically addressed using knowledge and technological platforms that are shared across the whole field. This significant know-how can also benefit other scientific communities, providing lateral solutions to their problems as well as leading to novel applications. The aim of this Roadmap is to provide a broad view of the state of the art in this lively research field and to discuss the advances required to tackle emerging challenges, thanks to contributions authored by experts affiliated with both academic institutions and high-tech industries. The Roadmap is organized so as to place side by side contributions on different aspects of optical processing, aiming to enhance the cross-fertilization of ideas between scientists working in three different fields of photonics: optical gates and logical units, high-bit-rate signal processing, and optical quantum computing. The ultimate intent of this paper is to provide guidance for young scientists, as well as to give research-funding institutions and stakeholders a comprehensive overview of the perspectives and opportunities offered by this research field.

    Deep Learning-Based Image Denoising Approach for the Identification of Structured Light Modes in Dusty Weather

    Structured light is gaining importance in free-space communication. Classifying spatially structured light modes is challenging in a dusty environment because of the distortion of the propagating beams. This article addresses the challenge by proposing a deep-learning convolutional autoencoder for mode denoising, followed by a neural network for mode classification. The input to the classifier is either the denoised image or the latent code of the convolutional autoencoder; this code is a low-dimensional representation of the input images. The proposed machine learning (ML) models were trained and tested using laboratory-generated mode datasets from the Laguerre-Gaussian and Hermite-Gaussian mode bases. The results show that the two proposed approaches achieve an average classification accuracy exceeding 98%, better than the classification accuracies (83–91%) recently reported in the literature.
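
    A minimal sketch of the denoise-then-classify pipeline described above, written with TensorFlow/Keras. The image size (64 x 64), latent dimension (128), number of mode classes (16), and all layer sizes are illustrative assumptions, not the authors' architecture:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG, LATENT, N_CLASSES = 64, 128, 16   # all assumed, not from the paper

    # Convolutional autoencoder: noisy mode image -> latent code -> denoised image.
    inp = layers.Input((IMG, IMG, 1))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    code = layers.Dense(LATENT, activation="relu", name="latent")(layers.Flatten()(x))
    x = layers.Dense(16 * 16 * 64, activation="relu")(code)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)

    autoencoder = models.Model(inp, out)   # trained on (noisy, clean) image pairs
    encoder = models.Model(inp, code)      # exposes the low-dimensional latent code
    autoencoder.compile(optimizer="adam", loss="mse")

    # Classifier fed with the latent code (the denoised image could be used instead).
    classifier = models.Sequential([
        layers.Input((LATENT,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    # autoencoder.fit(x_noisy, x_clean, ...)
    # classifier.fit(encoder.predict(x_noisy), y_labels, ...)

    Feeding the classifier the latent code rather than the denoised image is attractive because the code is far smaller than the image, which keeps the classifier itself lightweight.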

    Sagnac Loop Based Sensing System for Intrusion Localization Using Machine Learning

    Among optical sensing techniques, the distributed Sagnac interferometer (SI) sensor has the advantage of being simple and inexpensive to implement. Most proposed SI techniques exploit the frequency-null method for event localization. However, that technique suffers from low spectral signal power, which complicates event localization under environmental noise. In this work, event localization is achieved from time-domain signals instead of frequency-null signals using machine learning (ML), which is increasingly being exploited in many fields of science, including sensing. First, a training dataset of 200 events is generated over a 50 km effective sensing fibre. The time-domain signals are used as features for training the ML algorithm. Then, the random forest (RF) algorithm is used to develop a model for event-location prediction. The results show the capability of ML to predict the event’s location with a mean absolute error (MAE) of 55 m, and the percentage of test realizations with a prediction error greater than 200 m is 0.7%. The sensing-signal bandwidth is also investigated, with larger bandwidths giving better performance. Finally, the proposed model is validated experimentally, showing good accuracy with an MAE below 100 m.
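
    A minimal sketch of the localization step with scikit-learn, assuming each training example is a sampled time-domain trace labelled with the event position along the 50 km fibre. The trace length and the random arrays below are placeholders, not the paper's data:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n_events, n_samples = 200, 1024                 # trace length is assumed
    X = rng.normal(size=(n_events, n_samples))      # placeholder time-domain traces
    y = rng.uniform(0, 50_000, size=n_events)       # event location in metres

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("MAE (m):", mean_absolute_error(y_te, pred))
    print("fraction with error > 200 m:", np.mean(np.abs(pred - y_te) > 200))

    With the placeholder arrays the printed numbers are meaningless; real Sagnac traces would be needed to approach the reported 55 m MAE.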

    Photoplethysmography Data Reduction Using Truncated Singular Value Decomposition and Internet of Things Computing

    Biometric-based identity authentication is integral to modern technologies. Smartphones, personal computers, tablets, and security checkpoints all utilize some form of identity check based on methods such as face recognition and fingerprint verification. Photoplethysmography (PPG) is another form of biometric-based authentication that has recently been gaining momentum because it is effective and easy to implement. This paper considers a cloud-based system model for PPG authentication, where the PPG signals of various individuals are collected with distributed sensors and communicated to the cloud for authentication. Such a model incurs large signal traffic, especially in crowded places such as airport security checkpoints. This motivates the need for a compression–decompression scheme (or a Codec for short). The Codec is required to reduce the data traffic by compressing each PPG signal before it is communicated, i.e., encoding the signal right after it comes off the sensor and before it is sent to the cloud to be reconstructed (decoded). The Codec therefore has two system requirements to meet: (i) produce high-fidelity signal reconstruction; and (ii) have a computationally lightweight encoder. Both requirements are met by the Codec proposed in this paper, which is designed using truncated singular value decomposition (T-SVD). The proposed Codec is developed and tested using a publicly available dataset of PPG signals collected from multiple individuals, namely the CapnoBase dataset. It achieves a 95% compression ratio and a 99% coefficient of determination, delivering high-fidelity reconstruction while producing highly compressed signals. Producing those compressed signals also requires little computation: a single-board-computer implementation of the encoder averages 300 milliseconds per signal on a Raspberry Pi 3, fast enough to encode a PPG signal prior to transmission to the cloud.
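
    A minimal sketch of a T-SVD codec of the kind described, assuming fixed-length PPG segments and a rank-k basis learned offline; the encoder is then a single matrix-vector product, which is what keeps it lightweight. The segment length, rank, and random placeholder data are illustrative (real use would draw segments from CapnoBase):

    import numpy as np

    rng = np.random.default_rng(0)
    L, n_train, k = 1000, 500, 50               # k/L = 0.05 -> 95% compression ratio
    train = rng.normal(size=(n_train, L))       # placeholder for real PPG segments

    # Offline: learn a rank-k basis from the training matrix via SVD.
    _, _, Vt = np.linalg.svd(train, full_matrices=False)
    basis = Vt[:k]                              # shape (k, L)

    def encode(segment):                        # on-sensor: one matrix-vector product
        return basis @ segment                  # k coefficients instead of L samples

    def decode(coeffs):                         # in the cloud
        return basis.T @ coeffs

    x = rng.normal(size=L)                      # a new segment to transmit
    x_hat = decode(encode(x))

    ss_res = np.sum((x - x_hat) ** 2)
    ss_tot = np.sum((x - x.mean()) ** 2)
    print("compression ratio:", 1 - k / L)
    print("R^2:", 1 - ss_res / ss_tot)

    On unstructured placeholder data the R^2 will be poor; the reported 99% figure relies on real PPG segments being highly correlated, so a small basis captures almost all of their energy.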